Important: Red Hat Gluster Storage security, bug fix, and enhancement update

Synopsis

Important: Red Hat Gluster Storage security, bug fix, and enhancement update

Type/Severity

Security Advisory: Important

Topic

Updated glusterfs packages that fix multiple security issues and bugs, and add various enhancements, are now available for Red Hat Gluster Storage 3.4 on Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

GlusterFS is a key building block of Red Hat Gluster Storage. It is based on a stackable user-space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over network interconnections into one large, parallel network file system.
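For context, the aggregation described above is driven by the gluster command-line interface. The following is a minimal sketch, assuming hypothetical hostnames (server1 to server3), a hypothetical brick path (/bricks/brick1), and a hypothetical volume name (myvol); it is illustrative only, not a configuration recommendation from this advisory.

    # From one storage node, add the other servers to the trusted pool (hypothetical hostnames)
    gluster peer probe server2
    gluster peer probe server3

    # Aggregate one brick per server into a single 3-way replicated volume (hypothetical names)
    gluster volume create myvol replica 3 \
        server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1
    gluster volume start myvol

    # A client then mounts the pooled bricks as one parallel network file system
    mount -t glusterfs server1:/myvol /mnt/myvol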

Security Fix(es):

  • glusterfs: Unsanitized file names in debug/io-stats translator can allow remote attackers to execute arbitrary code (CVE-2018-10904)
  • glusterfs: Stack-based buffer overflow in server-rpc-fops.c allows remote attackers to execute arbitrary code (CVE-2018-10907)
  • glusterfs: I/O to arbitrary devices on storage server (CVE-2018-10923)
  • glusterfs: Device files can be created in arbitrary locations (CVE-2018-10926)
  • glusterfs: File status information leak and denial of service (CVE-2018-10927)
  • glusterfs: Improper resolution of symlinks allows for privilege escalation (CVE-2018-10928)
  • glusterfs: Arbitrary file creation on storage server allows for execution of arbitrary code (CVE-2018-10929)
  • glusterfs: Files can be renamed outside volume (CVE-2018-10930)
  • glusterfs: Improper deserialization in dict.c:dict_unserialize() can allow attackers to read arbitrary memory (CVE-2018-10911)
  • glusterfs: remote denial of service of gluster volumes via posix_get_file_contents function in posix-helpers.c (CVE-2018-10914)
  • glusterfs: Information Exposure in posix_get_file_contents function in posix-helpers.c (CVE-2018-10913)

For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.

Red Hat would like to thank Michael Hanselmann (hansmi.ch) for reporting these issues.

Additional Changes:

These updated glusterfs packages include numerous bug fixes and enhancements. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Gluster Storage 3.4 Release Notes for information on the most significant of these changes:

https://access.redhat.com/site/documentation/en-US/red_hat_gluster_storage/3.4/html/3.4_release_notes/

All users of Red Hat Gluster Storage are advised to upgrade to these updated packages, which provide numerous bug fixes and enhancements.

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
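As a general illustration only, applying a package update on a Red Hat Enterprise Linux 7 storage node typically involves commands like the following; the exact procedure, including whether and in what order gluster services must be restarted across the cluster, is covered by the article above.

    # Refresh repository metadata and apply the updated glusterfs packages
    yum clean expire-cache
    yum update 'glusterfs*'

    # Restart the management daemon so the updated binaries take effect
    # (follow the referenced article for the full, cluster-safe update procedure)
    systemctl restart glusterd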

Affected Products

  • Red Hat Enterprise Linux Server 7 x86_64
  • Red Hat Virtualization 4 x86_64
  • Red Hat Gluster Storage Server for On-premise 3 for RHEL 7 x86_64

Fixes

  • BZ - 1118770 - DHT: If directory creation is in progress and a rename of that directory comes from another mount point, then after both operations a few files are not accessible or listed on the mount, and more than one directory has the same gfid
  • BZ - 1167789 - DHT: Rebalance- Misleading log messages from __dht_check_free_space function
  • BZ - 1186664 - AFR: 3-way-replication: gluster volume set cluster.quorum-count should validate max no. of brick count to accept
  • BZ - 1215556 - Disperse volume: rebalance and quotad crashed
  • BZ - 1226874 - nfs-ganesha: in case pcs cluster setup fails then nfs-ganesha process should not start
  • BZ - 1234884 - Selfheal on a volume stops at a particular point and does not resume for a long time
  • BZ - 1260479 - DHT: While removing the brick, rebalance tries to migrate files to a brick which does not have space, due to which migration is failing
  • BZ - 1262230 - [quorum]: Replace brick happens even when quorum is not met.
  • BZ - 1277924 - Though files are in split-brain able to perform writes to the file
  • BZ - 1282318 - DHT : file rename operation is successful but log has error 'key:trusted.glusterfs.dht.linkto error:File exists' , 'setting xattrs on <old_filename> failed (File exists)'
  • BZ - 1282731 - Entry heal messages in glustershd.log while no entries shown in heal info
  • BZ - 1283045 - Index entries are not being purged in case the file does not exist
  • BZ - 1286092 - Duplicate files seen on mount point while trying to create files which are greater than the brick size
  • BZ - 1286820 - [GSS] [RFE] Addition of "summary" option in "gluster volume heal" command.
  • BZ - 1288115 - [RFE] Pass slave volume in geo-rep as read-only
  • BZ - 1293332 - [geo-rep+tiering]: Hot tier bricks changelogs reports rsync failure
  • BZ - 1293349 - AFR can ignore the zero size files while checking for split-brain
  • BZ - 1294412 - [RFE] : Start glusterd even when the glusterd is unable to resolve the bricks path.
  • BZ - 1299740 - [geo-rep]: On cascaded setup, for every entry there is a setattr recorded in changelogs of slave
  • BZ - 1301474 - [GSS] Intermittent file creation failures while doing concurrent writes on a distributed volume with more than 40 bricks
  • BZ - 1319271 - auth.allow and auth.reject not working when hosts are specified with hostnames/FQDN
  • BZ - 1324531 - [GSS] [RFE] Create trash directory only when it is enabled
  • BZ - 1330526 - adding brick to a single brick volume to convert to replica is not triggering self heal
  • BZ - 1333705 - gluster volume heal info "healed" and "heal-failed" showing wrong information
  • BZ - 1338693 - [geo-rep]: [Errno 16] Device or resource busy: '/tmp/gsyncd-aux-mount-5BA95I'
  • BZ - 1339054 - Need to improve remove-brick failure message when the brick process is down.
  • BZ - 1339765 - Permission denied errors in the brick logs
  • BZ - 1341190 - conservative merge happening on a x3 volume for a deleted file
  • BZ - 1342785 - [geo-rep]: Worker crashes with permission denied during hybrid crawl caused via replace brick
  • BZ - 1345828 - SAMBA-DHT : Rename ends up creates nested directories with same gfid
  • BZ - 1356454 - DHT: slow readdirp performance
  • BZ - 1360331 - default timeout of 5min not honored for analyzing split-brain files post setfattr replica.split-brain-heal-finalize
  • BZ - 1361209 - need to throw the right error message when the self-heal daemon is disabled and the user tries to trigger a manual heal
  • BZ - 1369312 - [RFE] DHT performance improvements for directory operations
  • BZ - 1369420 - AVC denial message getting related to glusterd in the audit.log
  • BZ - 1375094 - [geo-rep]: Worker crashes with OSError: [Errno 61] No data available
  • BZ - 1378371 - "ganesha.so cannot open" warning message in glusterd log in non ganesha setup.
  • BZ - 1384762 - glusterd status showing failed when it's stopped in RHEL7
  • BZ - 1384979 - glusterd crashed and core dumped
  • BZ - 1384983 - split-brain observed with arbiter & replica 3 volume.
  • BZ - 1388218 - Client stale file handle error in dht-linkfile.c under SPEC SFS 2014 VDA workload
  • BZ - 1392905 - Rebalance should skip the file if the file has hardlinks instead of failing
  • BZ - 1397798 - MTSH: multithreaded self heal hogs cpu consistently over 150%
  • BZ - 1401969 - Bringing down data bricks in cyclic order results in arbiter brick becoming the source for heal.
  • BZ - 1406363 - [GSS][RFE] Provide option to control heal load for disperse volume
  • BZ - 1408158 - IO is paused for at least one and a half minutes when one of the cluster nodes hosting the EC volume goes down.
  • BZ - 1408354 - [GSS] gluster fuse client losing connection to gluster volume frequently
  • BZ - 1409102 - [Arbiter] IO Failure and mount point inaccessible after killing a brick
  • BZ - 1410719 - [GSS] [RFE] glusterfs-ganesha package installation need to work when glusterfs process running
  • BZ - 1413005 - [Remove-brick] Lookup failed errors are seen in rebalance logs during rm -rf
  • BZ - 1413959 - [RFE] Need a way to resolve gfid split brains
  • BZ - 1414456 - [GSS] Entry heal pending for directories which have symlinks to a different replica set
  • BZ - 1419438 - gluster volume splitbrain info needs to display output of each brick in a stream fashion instead of buffering and dumping at the end
  • BZ - 1419807 - [Perf]: 25% regression on sequential reads on EC over SMB3
  • BZ - 1425681 - [Glusterd] Volume operations fail on a (tiered) volume because of a stale lock held by one of the nodes
  • BZ - 1426042 - performance/write-behind should respect window-size & trickling-writes should be configurable
  • BZ - 1436673 - Restore atime/mtime for symlinks and other non-regular files.
  • BZ - 1442983 - Unable to acquire lock for gluster volume leading to 'another transaction in progress' error
  • BZ - 1444820 - Script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh failing
  • BZ - 1446046 - glusterd: TLS verification fails when using intermediate CA instead of self-signed certificates
  • BZ - 1448334 - [GSS]glusterfind pre crashes with "UnicodeDecodeError: 'utf8' codec can't decode" error when the `--no-encode` is used
  • BZ - 1449638 - Poor write speed performance of fio test on distributed-disperse volume
  • BZ - 1449867 - [GSS] glusterd fails to start
  • BZ - 1452915 - healing fails with wrong error when one of the glusterd holds a lock
  • BZ - 1459101 - [GSS] low sequential write performance on distributed dispersed volume on RHGS 3.2
  • BZ - 1459895 - Brick Multiplexing: Gluster volume start force complains with command "Error : Request timed out" when there are multiple volumes
  • BZ - 1460639 - [Stress] : IO errored out with ENOTCONN.
  • BZ - 1460918 - [geo-rep]: Status shows ACTIVE for most workers in EC before it becomes the PASSIVE
  • BZ - 1461695 - glusterd crashed and core dumped, when the network interface is down
  • BZ - 1463112 - EC version not updating to latest post healing when another brick is down
  • BZ - 1463114 - [GSS][RFE] Log entry of files skipped/failed during rebalance operation
  • BZ - 1463592 - [Parallel-Readdir]Warning messages in client log saying 'parallel-readdir' is not recognized.
  • BZ - 1463964 - heal info shows root directory as "Possibly undergoing heal" when heal is pending and the heal daemon is disabled
  • BZ - 1464150 - [GSS] Unable to delete snapshot because it's in use
  • BZ - 1464350 - [RFE] Posix xlator needs to reserve disk space to prevent the brick from getting full.
  • BZ - 1466122 - Event webhook should work with HTTPS urls
  • BZ - 1466129 - Add generated HMAC token in header for webhook calls
  • BZ - 1467536 - Seeing timer errors in the rebalance logs
  • BZ - 1468972 - [GSS][RFE] Improve geo-replication logging
  • BZ - 1470566 - [RFE] Support changing from distribute to replicate with no active client operations
  • BZ - 1470599 - log messages appear stating mkdir failed on the new brick while adding a brick to increase replica count.
  • BZ - 1470967 - [GSS] geo-replication failed due to ENTRY failures on slave volume
  • BZ - 1472757 - Running sysbench on vm disk from plain distribute gluster volume causes disk corruption
  • BZ - 1474012 - [geo-rep]: Incorrect last sync "0" during history crawl after upgrade/stop-start
  • BZ - 1474745 - [RFE] Reserved port range for Gluster
  • BZ - 1475466 - [geo-rep]: Scheduler help needs correction for description of --no-color
  • BZ - 1475475 - [geo-rep]: Improve the output message to reflect the real failure with schedule_georep script
  • BZ - 1475779 - quota: directories don't get healed on newly added bricks when quota is full on a sub-directory
  • BZ - 1475789 - As long as appends keep happening on a file healing never completes on a brick when another brick is brought down in between
  • BZ - 1476827 - scripts: invalid test in S32gluster_enable_shared_storage.sh
  • BZ - 1476876 - [geo-rep]: RSYNC throwing internal errors
  • BZ - 1477087 - [geo-rep] master worker crash with interrupted system call
  • BZ - 1477250 - Negative Test: glusterd crashes for some of the volume options if set at cluster level
  • BZ - 1478395 - Extreme Load from self-heal
  • BZ - 1479335 - [GSS]glusterfsd is reaching 1200% CPU utilization
  • BZ - 1480041 - zero byte files with null gfid getting created on the brick instead of directory.
  • BZ - 1480042 - More useful error - replace 'not optimal'
  • BZ - 1480188 - writes amplified on brick with gluster-block
  • BZ - 1482376 - IO errors on gluster-block device
  • BZ - 1482812 - [afr] split-brain observed on T files post hardlink and rename in x3 volume
  • BZ - 1483541 - [geo-rep]: Slave has more entries than Master in multiple hardlink/rename scenario
  • BZ - 1483730 - [GSS] glusterfsd (brick) process crashed
  • BZ - 1483828 - DHT: readdirp fails to read some directories.
  • BZ - 1484113 - [geo-rep+qr]: Crashes observed at slave from qr_lookup_sbk during rename/hardlink/rebalance cases
  • BZ - 1484446 - [GSS] [RFE] Control Gluster process/resource using cgroup through tunables
  • BZ - 1487495 - client-io-threads option not working for replicated volumes
  • BZ - 1488120 - Moving multiple temporary files to the same destination concurrently causes ESTALE error
  • BZ - 1489876 - cli must throw a warning to discourage use of x2 volume which will be deprecated
  • BZ - 1491785 - Poor write performance on gluster-block
  • BZ - 1492591 - [GSS] Error No such file or directory for new file writes
  • BZ - 1492782 - self-heal daemon stuck
  • BZ - 1493085 - Sharding sends all application sent fsyncs to the main shard file
  • BZ - 1495161 - [GSS] Few brick processes are consuming more memory after patching 3.2
  • BZ - 1498391 - [RFE] Changelog option in a gluster volume disables with no warning
  • BZ - 1498730 - The output of the "gluster help" command is difficult to read
  • BZ - 1499644 - Eager lock should be present for both metadata and data transactions
  • BZ - 1499784 - [Downstream Only] : Retain cli and scripts for nfs-ganesha integration
  • BZ - 1499865 - [RFE] Implement DISCARD FOP for EC
  • BZ - 1500704 - gfapi: API needed to set lk_owner
  • BZ - 1501013 - [fuse-bridge] - Make event-history option configurable and have it disabled by default.
  • BZ - 1501023 - Make choose-local configurable through `volume-set` command
  • BZ - 1501253 - [GSS]Issues in accessing renamed file from multiple clients
  • BZ - 1501345 - [QUOTA] man page of gluster should be updated to list quota commands
  • BZ - 1501885 - "replace-brick" operation on a distribute volume kills all the glustershd daemon process in a cluster
  • BZ - 1502812 - [GSS] Client segfaults when grepping $UUID.meta files on EC vol.
  • BZ - 1503167 - [Geo-rep]: Make changelog batch size configurable
  • BZ - 1503173 - [Geo-rep] Master and slave mounts are not accessible to take client profile info
  • BZ - 1503174 - [Geo-rep]: symlinks trigger faulty geo-replication state (rsnapshot usecase)
  • BZ - 1503244 - socket poller error in glusterd logs
  • BZ - 1504234 - [GSS] gluster volume status command is missing in man page
  • BZ - 1505363 - Brick Multiplexing: stale brick processes getting created and volume status shows brick as down (pkill glusterfsd glusterfs, glusterd restart)
  • BZ - 1507361 - [GSS] glusterfsd processes consuming high memory on all gluster nodes from trusted pool
  • BZ - 1507394 - [GSS] Not able to create snapshot
  • BZ - 1508780 - [RHEL7] rebase RHGS 3.4.0 to upstream glusterfs-3.12.2
  • BZ - 1508999 - [Fuse Sub-dir] After performing add-brick on volume,doing rm -rf * on subdir mount point fails with "Transport endpoint is not connected"
  • BZ - 1509102 - In distribute volume after glusterd restart, brick goes offline
  • BZ - 1509191 - detach start does not kill the tierd
  • BZ - 1509810 - [Disperse] Implement open fd heal for disperse volume
  • BZ - 1509830 - Improve performance with xattrop update.
  • BZ - 1509833 - [Disperse] : Improve heal info command to handle obvious cases
  • BZ - 1510725 - [GSS] glusterfsd (brick) process crashed
  • BZ - 1511766 - The number of bytes of the quota specified in version 3.7 or later is incorrect
  • BZ - 1511767 - After detach tier start glusterd log flooded with "0-transport: EPOLLERR - disconnecting now" messages
  • BZ - 1512496 - Not all files synced using geo-replication
  • BZ - 1512963 - [GSS] Writing data to file on gluster volume served by ctdb/samba causes bricks to crash
  • BZ - 1515051 - bug-1247563.t is failing on master
  • BZ - 1516249 - help for volume profile is not in man page
  • BZ - 1517463 - [bitrot] scrub ondemand reports its start as success without additional detail
  • BZ - 1517987 - [GSS] high mem/cpu usage, brick processes not starting and ssl encryption issues while testing CRS scaling with multiplexing (500-800 vols)
  • BZ - 1518260 - EC DISCARD doesn't punch hole properly
  • BZ - 1519076 - glusterfs client crash when removing directories
  • BZ - 1519740 - [GSS]ganesha-gfapi log is filling at rate of 1gb/hr
  • BZ - 1520767 - 500%-600% CPU utilization when one brick is down in EC volume
  • BZ - 1522833 - high memory usage by glusterd on executing gluster volume set operations
  • BZ - 1523216 - fuse xlator uses block size and fragment size 128KB leading to rounding off in df output
  • BZ - 1527309 - entries not getting cleared post healing of softlinks (stale entries showing up in heal info)
  • BZ - 1528566 - Performance Drop observed when cluster.eager-lock turned on
  • BZ - 1528733 - memory leak: get-state leaking memory in small amounts
  • BZ - 1529072 - parallel-readdir = TRUE prevents directories listing
  • BZ - 1529451 - glusterd leaks memory when vol status is issued
  • BZ - 1530146 - dht_(f)xattrop does not implement migration checks
  • BZ - 1530325 - Brick multiplexing: glustershd fails to start on a volume force start after a brick is down
  • BZ - 1530512 - clean up port map on brick disconnect
  • BZ - 1530519 - disperse eager-lock degrades performance for file create workloads
  • BZ - 1531041 - Use after free in cli_cmd_volume_create_cbk
  • BZ - 1534253 - remove ExclusiveArch directive from SPEC file
  • BZ - 1534530 - spec: unpackaged files found for RHEL-7 client build
  • BZ - 1535281 - possible memleak in glusterfsd process with brick multiplexing on
  • BZ - 1535852 - glusterfind is extremely slow if there are lots of changes
  • BZ - 1537357 - [RFE] - get-state option should mark profiling enabled flag at volume level
  • BZ - 1538366 - [GSS] Git clone --bare --mirror of git bundle fails when cloning on gluster storage
  • BZ - 1539699 - tests/bugs/cli/bug-822830.t fails on Centos 7 and locally
  • BZ - 1540600 - glusterd fails to attach brick during restart of the node
  • BZ - 1540664 - Files are unavailable on the mount point
  • BZ - 1540908 - Do lock conflict check correctly for wait-list
  • BZ - 1540961 - The used space in the volume increases when the volume is expanded
  • BZ - 1541122 - Improve geo-rep pre-validation logs
  • BZ - 1541830 - Volume wrong size
  • BZ - 1541932 - A down brick is incorrectly considered to be online and makes the volume to be started without any brick available
  • BZ - 1543068 - [CIOT] : Gluster CLI says "io-threads : enabled" on existing volumes post upgrade.
  • BZ - 1543296 - After upgrade from RHGS-3.3.1 async(7.4) to RHGS-3.4(7.5), peer state went to peer rejected (connected).
  • BZ - 1544382 - Geo-replication is faulty on latest RHEL7.5 Snapshot2.0
  • BZ - 1544451 - [GSS] log-level=ERROR mount option not working, W level messages rapidly filling up storage
  • BZ - 1544824 - [Ganesha] : Cluster creation fails on selinux enabled/enforced nodes.
  • BZ - 1544852 - build: glusterfs.spec %post ganesha is missing %{?rhel} test
  • BZ - 1545277 - Brick process crashed after upgrade from RHGS-3.3.1 async(7.4) to RHGS-3.4(7.5)
  • BZ - 1545486 - [RFE] Generic support of fuse sub dir export at RHGS
  • BZ - 1545523 - [GSS] AIX client failed to write a temporary file to a gluster volume via gNFS.
  • BZ - 1545570 - DHT calls dht_lookup_everywhere for 1xn volumes
  • BZ - 1546075 - Hook up script for managing SELinux context on bricks failed to execute post volume creation
  • BZ - 1546717 - Removing directories from multiple clients throws ESTALE errors
  • BZ - 1546941 - [Rebalance] ENOSPC errors on few files in rebalance logs
  • BZ - 1546945 - [Rebalance] "Migrate file failed: <filepath>: failed to get xattr [No data available]" warnings in rebalance logs
  • BZ - 1546960 - Typo error in __dht_check_free_space function log message
  • BZ - 1547012 - Bricks getting assigned to different pids depending on whether brick path is IP or hostname based
  • BZ - 1547903 - Stale entries of snapshots need to be removed from /var/run/gluster/snaps
  • BZ - 1548337 - hitting EIO error when a brick is restarted in ecvolume
  • BZ - 1548829 - [BMux] : Stale brick processes on the nodes after vol deletion.
  • BZ - 1549023 - Observing continuous "disconnecting socket" error messages on client glusterd logs
  • BZ - 1550315 - [GSS] ACL settings on directories is different on newly added bricks compared to original bricks after rebalance completion
  • BZ - 1550474 - Don't display copyright and any upstream specific information in gluster --version
  • BZ - 1550771 - [GSS] Duplicate directory created on newly added bricks after rebalancing volume
  • BZ - 1550896 - No rollback of renames on succeeded subvols during failure
  • BZ - 1550918 - More than 330% CPU utilization by glusterfsd while IO in progress
  • BZ - 1550982 - After setting storage.reserve limits, df from client shows increased volume used space though the mount point is empty
  • BZ - 1550991 - fallocate created data set is crossing storage reserve space limits resulting 100% brick full
  • BZ - 1551186 - [Ganesha] Duplicate volume export entries in ganesha.conf causing volume unexport to fail
  • BZ - 1552360 - memory leak in pre-op in replicate volumes for every write
  • BZ - 1552414 - Take full lock on files in 3 way replication
  • BZ - 1552425 - Make afr_fsync a transaction
  • BZ - 1553677 - [Remove-brick] Many files were not migrated from the decommissioned bricks; commit results in data loss
  • BZ - 1554291 - When storage reserve limit is reached, appending data to an existing file throws EROFS error
  • BZ - 1554905 - Creating a replica 2 volume throws a split-brain possibility warning which has a link to upstream docs.
  • BZ - 1555261 - After a replace brick command, self-heal takes some time to start healing files on disperse volumes
  • BZ - 1556895 - [RHHI]Fuse mount crashed with only one VM running with its image on that volume
  • BZ - 1557297 - Pause/Resume of geo-replication with wrong user specified returns success
  • BZ - 1557365 - [RFE] DHT : Enable lookup-optimize by default
  • BZ - 1557551 - quota crawler fails w/ TLS enabled
  • BZ - 1558433 - vmcore generated due to discard file operation
  • BZ - 1558463 - Rebase redhat-release-server from RHEL-7.5
  • BZ - 1558515 - [RFE][RHEL7] update redhat-storage-server build for RHGS 3.4.0
  • BZ - 1558517 - [RFE] [RHEL7] product certificate update for RHEL 7.5
  • BZ - 1558948 - linux untar errors out at completion during disperse volume in-service upgrade
  • BZ - 1558989 - 60% regression on small-file creates from 3.3.1
  • BZ - 1558990 - 30% regression on small-file reads from 3.3.1
  • BZ - 1558991 - 19% regression on smallfile appends over
  • BZ - 1558993 - 60% regression in small-file deletes from 3.3.1
  • BZ - 1558994 - 47% regression in mkdir from 3.3.1
  • BZ - 1558995 - 30% regression on small-file rmdirs from 3.3.1
  • BZ - 1559084 - [EC] Read performance of EC volume exported over gNFS is significantly lower than write performance
  • BZ - 1559452 - Volume status inode is broken with brickmux
  • BZ - 1559788 - Remove use-compound-fops feature
  • BZ - 1559831 - [RHHI] FUSE mount crash while running one Engine VM on replicated volume
  • BZ - 1559884 - Linkto files visible in mount point
  • BZ - 1559886 - Brick process hung, and looks like a deadlock in inode locks
  • BZ - 1560955 - After performing remove-brick followed by add-brick operation, brick went offline state
  • BZ - 1561733 - Rebalance failures on a dispersed volume with lookup-optimize enabled
  • BZ - 1561999 - rm command hangs in fuse_request_send
  • BZ - 1562744 - [EC] slow heal speed on disperse volume after brick replacement
  • BZ - 1563692 - Linux kernel untar failed with "xz: (stdin): Read error: Invalid argument" immediate after add-brick
  • BZ - 1563804 - Client can create denial of service (DOS) conditions on server
  • BZ - 1565015 - [Ganesha] File Locking test is failing on ganesha v3 protocol
  • BZ - 1565119 - Rebalance on few nodes doesn't seem to complete - stuck at FUTEX_WAIT
  • BZ - 1565399 - [GSS] geo-rep in faulty session due to OSError: [Errno 95] Operation not supported
  • BZ - 1565577 - [geo-rep]: Lot of changelogs retries and "dict is null" errors in geo-rep logs
  • BZ - 1565962 - Disable features.selinux
  • BZ - 1566336 - [GSS] Pending heals are not getting completed in CNS environment
  • BZ - 1567001 - [Ganesha+EC] Bonnie failed with I/O error while crefi and parallel lookup were going on in parallel from 4 clients
  • BZ - 1567100 - "Directory selfheal failed: Unable to form layout " log messages seen on client
  • BZ - 1567110 - Make cluster.localtime-logging not to be visible in gluster v get
  • BZ - 1567899 - growing glusterd memory usage with connected RHGSWA
  • BZ - 1568297 - Disable choose-local in groups virt and gluster-block
  • BZ - 1568374 - timer: Possible race condition between gf_timer_* routines
  • BZ - 1568655 - [GSS] symbolic links to read-only filesystem causing geo-replication session to enter faulty state
  • BZ - 1568896 - [geo-rep]: geo-replication scheduler is failing due to unsuccessful umount
  • BZ - 1569457 - EIO errors on some operations when volume has mixed brick versions on a disperse volume
  • BZ - 1569490 - [geo-rep]: in-service upgrade fails, session in FAULTY state
  • BZ - 1569951 - Amends in volume profile option 'gluster-block'
  • BZ - 1570514 - [RFE] make RHGS version available with glusterfs-server package
  • BZ - 1570541 - [Ganesha] Ganesha enable command errors out while setting up ganesha on 4 node out of 5 node gluster cluster
  • BZ - 1570582 - Build is failed due to access rpc->refcount in wrong way in quota.c
  • BZ - 1570586 - Glusterd crashed on a few (master) nodes
  • BZ - 1571645 - Remove unused variable
  • BZ - 1572043 - [Geo-rep]: Status in ACTIVE/Created state
  • BZ - 1572075 - glusterfsd crashing because of RHGS WA?
  • BZ - 1572087 - Redundant synchronization in rename codepath for a single subvolume DHT
  • BZ - 1572570 - [GSS] Glusterfind process crashes with UnicodeDecodeError
  • BZ - 1572585 - Remove-brick failed on Distributed volume while rm -rf is in-progress
  • BZ - 1575539 - [GSS] Glusterd memory leaking in gf_gld_mt_linebuf
  • BZ - 1575555 - [GSS] Warning messages generated for the removal of extended attribute security.ima flodding client logs
  • BZ - 1575557 - [Ganesha] "Gluster nfs-ganesha enable" command sometimes gives output as "failed" with "Unlocking failed" error messages, even though the cluster is up and healthy in the backend
  • BZ - 1575840 - brick crash seen while creating and deleting two volumes in loop
  • BZ - 1575877 - [geo-rep]: Geo-rep scheduler fails
  • BZ - 1575895 - DHT Log flooding in mount log "key=trusted.glusterfs.dht.mds [Invalid argument]"
  • BZ - 1577051 - [Remove-brick+Rename] Failure count shows zero though there are file migration failures
  • BZ - 1578647 - If parallel-readdir is enabled, the readdir-optimize option behaves as off even when it is set to on
  • BZ - 1579981 - When the customer tries to migrate an RHV 4.1 disk from one storage domain to another, the glusterfsd core dumps.
  • BZ - 1580120 - [Ganesha] glusterfs (posix-acl xlator layer) checks for "write permission" instead for "file owner" during open() when writing to a file
  • BZ - 1580344 - Remove EIO from the dht_inode_missing macro
  • BZ - 1581047 - [geo-rep+tiering]: Hot and Cold tier brick changelogs report rsync failure
  • BZ - 1581057 - writes succeed when only good brick is down in 1x3 volume
  • BZ - 1581184 - After creating and starting 601 volumes, self heal daemon went down and seeing continuous warning messages in glusterd log
  • BZ - 1581219 - centos regression fails for tests/bugs/replicate/bug-1292379.t
  • BZ - 1581231 - quota crawler not working unless lookup is done from mount
  • BZ - 1581553 - [distribute]: Excessive 'dict is null' errors in geo-rep logs
  • BZ - 1581647 - Brick process crashed immediate after volume start with force option
  • BZ - 1582066 - Inconsistent access permissions on directories after bringing back the down sub-volumes
  • BZ - 1582119 - 'custom extended attributes' set on a directory are not healed after bringing back the down sub-volumes
  • BZ - 1582417 - [Geo-rep]: Directory renames are not synced in hybrid crawl
  • BZ - 1583047 - changelog: Changelog is not capturing rename of files
  • BZ - 1588408 - Fops are sent to glusterd and uninitialized brick stack when client reconnects to brick
  • BZ - 1592666 - lookup not assigning gfid if file is not present in all bricks of replica
  • BZ - 1593865 - shd crash on startup
  • BZ - 1594658 - Block PVC fails to mount on Jenkins pod
  • BZ - 1597506 - Introduce database group profile (to be only applied for CNS)
  • BZ - 1597511 - introduce cluster.daemon-log-level option
  • BZ - 1597654 - "gluster vol heal <volname> info" is locked forever
  • BZ - 1597768 - br-state-check.t crashed while brick multiplex is enabled
  • BZ - 1598105 - core dump generated while doing file system operations
  • BZ - 1598356 - delay gluster-blockd start until all bricks comeup
  • BZ - 1598384 - [geo-rep]: [Errno 2] No such file or directory
  • BZ - 1599037 - [GSS] Cleanup stale (unusable) XSYNC changelogs.
  • BZ - 1599362 - memory leak in get-state when geo-replication session is configured
  • BZ - 1599823 - [GSS] Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick"
  • BZ - 1599998 - When reserve limits are reached, append on an existing file after a truncate operation results in a hang
  • BZ - 1600057 - crash on glusterfs_handle_brick_status of the glusterfsd
  • BZ - 1600790 - Segmentation fault while using gfapi while getting volume utilization
  • BZ - 1601245 - [Ganesha] Ganesha crashed in mdcache_alloc_and_check_handle while running bonnie and untars with parallel lookups
  • BZ - 1601298 - CVE-2018-10904 glusterfs: Unsanitized file names in debug/io-stats translator can allow remote attackers to execute arbitrary code
  • BZ - 1601314 - [geo-rep]: Geo-replication not syncing renamed symlink
  • BZ - 1601331 - dht: Crash seen in thread dht_dir_attr_heal
  • BZ - 1601642 - CVE-2018-10907 glusterfs: Stack-based buffer overflow in server-rpc-fops.c allows remote attackers to execute arbitrary code
  • BZ - 1601657 - CVE-2018-10911 glusterfs: Improper deserialization in dict.c:dict_unserialize() can allow attackers to read arbitrary memory
  • BZ - 1607617 - CVE-2018-10914 glusterfs: remote denial of service of gluster volumes via posix_get_file_contents function in posix-helpers.c
  • BZ - 1607618 - CVE-2018-10913 glusterfs: Information Exposure in posix_get_file_contents function in posix-helpers.c
  • BZ - 1608352 - glusterfsd process crashed in a multiplexed configuration during cleanup of a single brick-graph triggered by volume-stop.
  • BZ - 1609163 - Fuse mount of volume fails when gluster_shared_storage is enabled
  • BZ - 1609724 - brick (glusterfsd) crashed at in quota_lookup
  • BZ - 1610659 - CVE-2018-10923 glusterfs: I/O to arbitrary devices on storage server
  • BZ - 1611151 - turn off disperse-other-eager-lock by default to avoid performance hit on simultaneous lookups
  • BZ - 1612098 - Brick not coming up on a volume after rebooting the node
  • BZ - 1612658 - CVE-2018-10927 glusterfs: File status information leak and denial of service
  • BZ - 1612659 - CVE-2018-10928 glusterfs: Improper resolution of symlinks allows for privilege escalation
  • BZ - 1612660 - CVE-2018-10929 glusterfs: Arbitrary file creation on storage server allows for execution of arbitrary code
  • BZ - 1612664 - CVE-2018-10930 glusterfs: Files can be renamed outside volume
  • BZ - 1613143 - CVE-2018-10926 glusterfs: Device files can be created in arbitrary locations
  • BZ - 1615338 - Rebalance status shows wrong count of "Rebalanced-files" if the file has hardlinks
  • BZ - 1615440 - turn off brick multiplexing for stand alone RHGS
  • BZ - 1615911 - [geo-rep]: No such file or directory when a node is shut down and brought back
  • BZ - 1619416 - memory grows until swap is 100% utilized and some brick daemons crashes during creating of large number of small files
  • BZ - 1619538 - Snapshot status fails with commit failure
  • BZ - 1620469 - Brick process NOT ONLINE for heketidb and block-hosting volume
  • BZ - 1620765 - posix_mknod does not update trusted.pgfid.xx xattr correctly
  • BZ - 1622029 - [geo-rep]: geo-rep reverse sync in FO/FB can accidentally delete the content at the original master in case of gfid conflict in 3.4.0 without explicit user rmdir
  • BZ - 1622452 - Bricks for heketidb and some other volumes not ONLINE in gluster volume status

CVEs

References